In a fully distributed data processing system, work requests may be generated either by users or by executing processes. Servicing each work request requires a set of system resources, and multiple copies of the required resources may exist within the overall system. The problem is to select the specific instances of these resources, needed to execute each process generated from a work request, so that average user response time is minimized and system throughput is maximized. This thesis investigates the work distribution problem by using a simulation model to analyze the performance of three work distribution algorithms in test cases that simulated various network conditions. The first algorithm attempts to minimize communication between the network nodes, the second attempts to balance the processing load across the processors at each node, and the third combines the other two. The simulation results showed that, under the specific conditions tested, the communication-minimizing algorithm outperformed the other two in minimizing average user response time. Its performance was especially good with low bandwidths, with a global bus topology, and with multiple file copies.
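The three work distribution strategies described above can be sketched as simple node-selection policies. This is a minimal illustration only, not the thesis's simulation model: the node names, cost fields, and combining weight are hypothetical stand-ins for whatever metrics the actual algorithms used.

```python
from dataclasses import dataclass


@dataclass
class Node:
    """A candidate node holding a copy of the required resource."""
    name: str
    comm_cost: float  # estimated cost of communicating with this node
    load: float       # current processing load at this node


def min_comm(nodes):
    """Policy 1: choose the node with the lowest communication cost."""
    return min(nodes, key=lambda n: n.comm_cost)


def balance_load(nodes):
    """Policy 2: choose the least-loaded node."""
    return min(nodes, key=lambda n: n.load)


def combined(nodes, w=0.5):
    """Policy 3: weighted combination of communication cost and load.

    The weight w is a hypothetical tuning parameter.
    """
    return min(nodes, key=lambda n: w * n.comm_cost + (1 - w) * n.load)


if __name__ == "__main__":
    candidates = [
        Node("A", comm_cost=1.0, load=2.5),
        Node("B", comm_cost=3.0, load=0.2),
        Node("C", comm_cost=2.0, load=0.5),
    ]
    # Each policy can prefer a different node for the same work request.
    print(min_comm(candidates).name)      # cheapest to reach
    print(balance_load(candidates).name)  # least loaded
    print(combined(candidates).name)      # best weighted trade-off
```

Under the weighted policy, a node that is neither the cheapest to reach nor the least loaded can still win if its combined score is lowest, which is the trade-off the third algorithm represents.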